Generalization in Interactive Networks: The Benefits of Inhibitory Competition and Hebbian Learning

Author

  • Randall C. O'Reilly
Abstract

Computational models in cognitive neuroscience should ideally use biological properties and powerful computational principles to produce behavior consistent with psychological findings. Error-driven backpropagation is computationally powerful and has proven useful for modeling a range of psychological data but is not biologically plausible. Several approaches to implementing backpropagation in a biologically plausible fashion converge on the idea of using bidirectional activation propagation in interactive networks to convey error signals. This article demonstrates two main points about these error-driven interactive networks: (1) they generalize poorly due to attractor dynamics that interfere with the network's ability to produce novel combinatorial representations systematically in response to novel inputs, and (2) this generalization problem can be remedied by adding two widely used mechanistic principles, inhibitory competition and Hebbian learning, that can be independently motivated for a variety of biological, psychological, and computational reasons. Simulations using the Leabra algorithm, which combines the biologically plausible, error-driven generalized recirculation (GeneRec) learning algorithm with inhibitory competition and Hebbian learning, show that these mechanisms can result in good generalization in interactive networks. These results support the general conclusion that cognitive neuroscience models that incorporate the core mechanistic principles of interactivity, inhibitory competition, and error-driven and Hebbian learning satisfy a wider range of biological, psychological, and computational constraints than models employing a subset of these principles.
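
As a rough illustration of how these pieces fit together, here is a minimal NumPy sketch of one Leabra-style trial on a single input-to-hidden weight matrix. It is a sketch under stated assumptions, not the reference Leabra implementation: the hard k-winners-take-all inhibition, the layer sizes and rates, and especially the stand-in plus phase (which in a real bidirectional network would come from clamping the target and re-settling the activations) are all illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def kwta(net, k):
        """Inhibitory competition: only the k most strongly driven
        units get to be active (hard k-winners-take-all)."""
        act = np.zeros_like(net)
        winners = np.argsort(net)[-k:]
        act[winners] = 1.0 / (1.0 + np.exp(-net[winners]))  # sigmoid on winners
        return act

    def leabra_dw(x, y_minus, y_plus, w, lrate=0.05, lmix=0.01):
        """Mixed weight update for sender activations x -> receivers y.
        Error-driven term: GeneRec/CHL difference between plus (outcome)
        and minus (expectation) phase activations of the receivers.
        Hebbian term: CPCA-style rule pulling weights toward the input
        pattern in proportion to plus-phase receiver activity."""
        err = np.outer(x, y_plus - y_minus)   # GeneRec error gradient
        hebb = y_plus * (x[:, None] - w)      # CPCA Hebbian term
        return lrate * ((1.0 - lmix) * err + lmix * hebb)

    # One illustrative trial.
    n_in, n_hid = 10, 8
    w = rng.uniform(0.3, 0.7, size=(n_in, n_hid))
    x = (rng.random(n_in) < 0.3).astype(float)   # sparse input pattern
    y_minus = kwta(x @ w, k=2)                   # expectation (minus) phase
    # Stand-in for the plus phase; a real network would clamp the target
    # and re-settle the bidirectional activations instead.
    y_plus = kwta(x @ w + rng.normal(0.0, 0.1, n_hid), k=2)
    w += leabra_dw(x, y_minus, y_plus, w)

The division of labor the sketch tries to make concrete: kWTA enforces sparse, competitive representations; the plus/minus phase difference carries the error signal; and the Hebbian term keeps the weights representative of the input statistics.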

Similar articles

Contrastive Hebbian Learning in the Continuous Hopfield Model

This paper shows that contrastive Hebbian learning, the algorithm used in mean-field learning, can be applied to any continuous Hopfield model. This implies that non-logistic activation functions as well as self-connections are allowed. Contrary to previous approaches, the learning algorithm is derived without treating it as a mean-field approximation to Boltzmann machine learning. The paper includes a...
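
For reference, the contrastive Hebbian update being generalized has the standard two-phase form (notation ours, not necessarily the paper's): the weight change is the difference between unit co-products in the clamped (plus) phase and the free-running (minus) phase.

    % Contrastive Hebbian learning with learning rate \epsilon:
    % strengthen co-activity in the clamped (+) phase, weaken it
    % in the free-running (-) phase.
    \Delta w_{ij} = \epsilon \left( a_i^{+} a_j^{+} - a_i^{-} a_j^{-} \right)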

Exploring Biologically-Inspired Interactive Networks for Object Recognition

This thesis deals with biologically-inspired interactive neural networks used for the task of object recognition. Such networks offer an interesting alternative approach to traditional image processing techniques. Although the networks are very powerful classification tools, they are difficult to handle due to their bidirectional interactivity. This is one of the main reasons why these networks...

Improved Generalization in a Recurrent Hierarchical Network Using a Mixture of Error-driven and Hebbian Learning

Recurrent hierarchical networks form the basis of human vision, and have been demonstrated to be efficient in visual feature extraction. We have evaluated the generalization capability of a recurrent hierarchical network, used for recognition of cloud patterns in satellite images, through systematic testing of the amount of Hebbian learning used for training. Hebbian learning turned out to be e...
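
The "amount of Hebbian learning" varied in such studies is conventionally a mixing proportion between the two weight-change terms; a generic form (the symbol \lambda is our notation, not necessarily the thesis's) is:

    % Convex mixture of Hebbian and error-driven weight changes;
    % \lambda is the proportion of Hebbian learning being varied.
    \Delta w = \lambda \, \Delta w_{\mathrm{hebb}} + (1 - \lambda) \, \Delta w_{\mathrm{err}}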

An interactive Hebbian account of lexically guided tuning of speech perception.

We describe an account of lexically guided tuning of speech perception based on interactive processing and Hebbian learning. Interactive feedback provides lexical information to prelexical levels, and Hebbian learning uses that information to retune the mapping from auditory input to prelexical representations of speech. Simulations of an extension of the TRACE model of speech perception are pr...
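
A minimal sketch of that retuning mechanism, assuming a simple linear auditory-to-prelexical mapping; the sizes, the learning rate, and the feedback-biased activations below are illustrative stand-ins, not TRACE's actual dynamics.

    import numpy as np

    def retune(w, auditory, prelexical, lrate=0.01):
        """Hebbian retuning of the auditory -> prelexical mapping.
        `prelexical` holds activations after interactive lexical feedback
        has pushed them toward the lexically consistent interpretation, so
        the Hebbian product moves the mapping toward the ambiguous input."""
        return w + lrate * np.outer(auditory, prelexical)

    # An ambiguous fricative between /s/ and /f/: feedback from a word
    # context favoring /s/ strengthens that unit's input weights.
    w = np.full((3, 2), 0.5)              # 3 auditory features -> {/s/, /f/}
    auditory = np.array([0.5, 0.5, 0.5])  # ambiguous input
    prelexical = np.array([0.9, 0.1])     # feedback-biased activations
    w = retune(w, auditory, prelexical)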

Robustness of Hebbian and Anti-Hebbian Learning

Self-organizing neural networks with Hebbian and anti-Hebbian learning rules were found robust against variations in the parameters of neurons of the network, such as neural activities, learning rates, and noisy inputs. Robustness was evaluated from the point of view of properties of soft competition for input correlations. Two models were studied: a neural network with presynaptic Hebbian learn...
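
In generic form (our notation), the two rule families under study pair a Hebbian rule on the feedforward weights with an anti-Hebbian rule on the lateral weights, the latter's minus sign decorrelating the output units:

    % Hebbian update on feedforward weights w (input x, output y) and
    % anti-Hebbian update on lateral weights q between output units.
    \Delta w_{ij} = \eta \, y_i \, x_j
    \qquad
    \Delta q_{ik} = -\mu \, y_i \, y_k \quad (i \neq k)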

Journal:
  • Neural Computation

Volume 13, Issue 6

Pages: -

Publication date: 2001